Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The concept implies a separate data packet queue (or job queue) for each traffic flow (or for each program process), as opposed to the traditional approach of a single FIFO queue for all packet flows (or all process jobs). The purpose is to achieve fairness when a limited resource is shared, for example to prevent flows with large packets (or processes that generate small jobs) from obtaining more throughput (or CPU time) than other flows (or processes). Fair queuing is implemented in some advanced packet switches and routers.

== History ==

The term "fair queuing" was coined by John Nagle in 1985 while proposing round-robin scheduling in the gateway between a local area network and the internet to reduce network disruption from badly-behaving hosts. As one cited account describes the scheme: "Nagle presented his 'fair queuing' scheme, in which gateways maintain separate queues for each sending host. In this way, hosts with pathological implementations can not usurp more than their fair share of the gateway's resources. This invoked spirited and interested discussion."

A byte-weighted version was proposed by A. Demers, S. Keshav and S. Shenker in 1989, building on Nagle's earlier fair queuing algorithm. The byte-weighted fair queuing algorithm aims to mimic bit-by-bit multiplexing by computing a theoretical departure date for each packet; a sketch of this idea follows below.

The concept has been further developed into weighted fair queuing, and the more general concept of traffic shaping, where queuing priorities are dynamically controlled to achieve desired flow quality-of-service goals or to accelerate some flows (see net neutrality).
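The following Python sketch illustrates the departure-date idea only; it is not the exact algorithm of Demers, Keshav and Shenker. The flow names, packet sizes and the simplified virtual-time bookkeeping (advancing virtual time to the finish time of each dequeued packet rather than tracking bit-by-bit rounds) are assumptions made for this example.

<syntaxhighlight lang="python">
import heapq
from dataclasses import dataclass, field

@dataclass
class FairQueue:
    # Simplified illustration of per-packet "theoretical departure dates".
    # Not the exact Demers/Keshav/Shenker algorithm: virtual time here is
    # approximated by the finish time of the last transmitted packet.
    last_finish: dict = field(default_factory=dict)  # last finish time per flow
    queue: list = field(default_factory=list)        # packets ordered by finish time
    virtual_time: float = 0.0

    def enqueue(self, flow: str, size: int) -> None:
        # A packet's finish time is its size added to the later of the current
        # virtual time and the flow's previous finish time, so each flow
        # progresses as if it had its own bit-by-bit share of the link.
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + size
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, flow, size))

    def dequeue(self):
        # Transmit the packet with the smallest theoretical departure date next.
        finish, flow, size = heapq.heappop(self.queue)
        self.virtual_time = finish
        return flow, size

# Example (flow names and sizes are invented): flow A sends large packets,
# flow B sends small ones; B's packets depart first despite arriving later.
fq = FairQueue()
fq.enqueue("A", 1500)
fq.enqueue("A", 1500)
fq.enqueue("B", 100)
fq.enqueue("B", 100)
while fq.queue:
    print(fq.dequeue())   # ('B', 100), ('B', 100), ('A', 1500), ('A', 1500)
</syntaxhighlight>

In this sketch, large packets from one flow cannot delay another flow's small packets beyond what a hypothetical bit-by-bit multiplexer would allow, which is the fairness property the byte-weighted scheme targets.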